perm filename BELIEF.AI[ESS,JMC] blob sn#005560 filedate 1972-02-26 generic text, type T, neo UTF8
00100	ARE BELIEFS AN EPI-PHENOMENON?
00200	
00300	
00400		I  shall  compare  two ways of representing mental phenomena:
00500	the stimulus-response way and a way in which the  concept  of  belief
00600	plays a basic role.
00700	
00800		The  stimulus-response  model  is  more  familiar and more in
00900	accordance with the course of psychology in  the  20th  century.   It
01000	regards  the  brain  as  a  black  box that receives inputs and emits
01100	outputs in accordance with its structure.  The business of psychology
01200	is  to  study  the  structure  of the input-output relations, and the
01300	business of artificial intelligence  is  to  construct  systems  with
01400	input-output relations adequate to perform certain tasks.  It turns
01500	out that the outputs depend on past inputs in such a complex way that
01600	most  scientists prefer to use brain states as intermediate variables
01700	in describing input-output  relations.   The  mathematical  formalism
01800	that  describes  this  is  automata theory.  An automaton is a system
01900	whose internal state changes to new values  in  accordance  with  the
02000	inputs  and  the  previous  state.  While the concept of automaton is
02100	basic for this approach, automata theory as a branch  of  mathematics
02200	has  mainly  been  concerned  with  questions  not  relevant  to  the
02300	potential  psychological  applications.   The   work   in   heuristic
02400	programming  can  mainly be considered to follow this model, although
02500	the state spaces  are  described  in  terms  of  data  structures  in
02600	computer  memory rather than in the undifferentiated form of automata
02700	theory.
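	To make the automaton notion concrete, here is a minimal sketch in
illustrative Python; the state names, inputs, and transition table are
hypothetical, chosen only to show that the next state is a function of the
previous state and the input, and that the output is read off from the state.

# A minimal finite automaton: the internal state changes to a new value
# determined by the previous state and the input, and the output is read
# off from the state.  State names, inputs, and the tables are hypothetical.

TRANSITIONS = {                      # (previous state, input) -> next state
    ("idle", "stimulus"): "aroused",
    ("aroused", "stimulus"): "aroused",
    ("aroused", "rest"): "idle",
    ("idle", "rest"): "idle",
}

OUTPUTS = {                          # state -> emitted output
    "idle": "no response",
    "aroused": "response",
}

def run(inputs, state="idle"):
    """Feed a sequence of inputs through the automaton, collecting outputs."""
    outputs = []
    for x in inputs:
        state = TRANSITIONS[(state, x)]
        outputs.append(OUTPUTS[state])
    return outputs

print(run(["stimulus", "rest", "stimulus"]))
# -> ['response', 'no response', 'response']

	The same input can thus produce different outputs depending on the
state that earlier inputs left behind, which is the point of using states
as intermediate variables in describing input-output relations.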
02800	
02900		An alternative view is to consider a human being or  a  robot
03000	as  having  certain  beliefs  about  the  world, about the effects of
03100	actions, about other persons and their goals, about  the  quality  of
03200	its  own knowledge and ways of obtaining it, and about its goals. The
03300	object of psychology is then to study these mental structures and the
03400	object  of  artificial  intelligence  is to design mental structures.
03500	This view has its roots  in  common  experience  and  was  common  in
03600	psychology before the behaviorist revolution of the early 20th
03700	century.  The purpose of this article is to point  out  some  of  its
03800	advantages.
03900	
04000		In the first place, the two views are not contradictory about
04100	matters of fact.  To assert that John believes that Fred is  at  home
04200	is  to  assert  something  about  the  state  of  John's brain, and a
04300	behaviorist  will  undertake  to  translate  the  assertion  into  an
04400	assertion  about  John's  propensity to behave in given ways.  On the
04500	other hand, a psychologist or a  computer  scientist  who  thinks  in
04600	terms  of  beliefs  may  make different kinds of theories or programs
04700	than one who thinks in terms of stimulus and response or input-output
04800	relations.  I contend that the question of which view is better is an
04900	empirical one and that  the  mentalistic  view  is  better  for  many
05000	purposes.
05100	
05200		A major argument used to establish the S-R model and demolish
05300	the mentalistic model was philosophical.   It  was  argued  that  the
05400	mental  concepts  in  use  at  the  time  were vague and meaningless,
05500	because they had no  direct  connection  with  observable  phenomena.
05600	This  may  have  been  true  of many nineteenth century psychological
05700	concepts; I don't know enough about them to form an opinion.   It  is
05800	also  true  that  mankind  has  occasionally  built huge intellectual
05900	structures that lost connection with the empirical  facts  they  were
06000	supposedly  relevant  to. However, the suggested cure of building all
06100	theories in terms of directly  observable  entities  has  not  worked
06200	well.   Unfortunately,  the logically basic facts of the universe are
06300	often not directly observable. There is  no  logical  necessity  that
06400	when  an organism evolves intelligence, it will simultaneously evolve
06500	the ability to directly observe fundamental phenomena.  In fact,  the
06600	development  of  physics  with  its  concept  of  fields, fundamental
06700	particles, symmetries, etc.  has  gotten  farther  and  farther  from
06800	direct  observability  even though its ability to predict and control
06900	observable physical phenomena has simultaneously increased.  Thus, it
07000	has turned out to be necessary to live somewhat dangerously.  We have
07100	to construct theories whose basic concepts and axioms are  quite  far
07200	from observation.  We often have to live a long time worried that one
07300	of our favorite concepts may be meaningless. In truth, if a theory is
07400	connected  with  observation,  it  is  usually possible to reword the
07500	theory so that the basic concepts are definable operationally,  often
07600	in  a complex way.  This is often useful, because it may suggest that
07700	some of the theory is expendable, and some philosophers advocate that
07800	it  always  be  done.   However,  when  a theory is changed, it often
07900	happens that the relations among the basic concepts remain
08000	unchanged,   but   the   connection   with   observation  is  changed
08100	drastically.  I have in mind the changing definitions of the units of
08200	time and length.
08300	
08400		Now  let  me  make a plausibility argument for the utility of
08500	mentalistic concepts in psychology and artificial  intelligence.  The
08600	human  brain  has evolved an ability to get along in our physical and
08700	social world, and the  computer  scientist  tries  to  make  programs
08800	capable  of  performing  tasks  in it.  One of the characteristics of
08900	this world is that not much of it is simultaneously  observable,  and
09000	parts  of  it  that  are  important  to  some  people are often never
09100	observed by these people.  Our  ways  of  getting  information  about
09200	these  aspects  of  the  world  vary,  and  our  attitude towards the
09300	information we get varies.  The source of much of our information  is
09400	what other people say or what we read written by people we have never
09500	met.  Many important facts and relationships  are  much  more  stable
09600	than our ways of getting information about them.
09700	
09800		For  example,  consider  a  person's bank balance.  This is a
09900	number obtained in various ways: reading the bank statement,
10000	telephoning the bank, and computing one's checks and deposits.  People go
10100	to much trouble to raise it and  refrain  from  otherwise  attractive
10200	activities  to avoid lowering it.  My experience has been that when I
10300	want something from a store, they will let me have it if I give
10400	them a check.  No one has ever put me in jail after I wrote a check.
10500	Thus I have no direct experience confirming my belief that if I  give
10600	this  nice  salesman a check for that shiny new twin engine airplane,
10700	something bad will happen.
10800	
10900		That something bad would happen and that most people  believe
11000	something  bad  would happen are pretty certain facts.  They are more
11100	certain  than  anything  we  know  about  how  such  information   is
11200	represented  in  the  human  brain.   That  they  must be represented
11300	somehow in the memory of the computer is more certain  to  the  robot
11400	designer than any decision about precisely how.
11500	
11600		All  this  is  intended  to  justify  the study of beliefs in
11700	psychology and computer science.  It is not intended  to  claim  that
11800	all  information  in a human or a machine consists of beliefs or that
11900	direct input-output relations should not be studied or programmed.
12000	
12100	
12200	BELIEF STRUCTURES
12300	
12400		Very likely, there are many ways of formalizing structures of
12500	belief.  I shall sketch a simple one,  but  without  much  conviction
12600	that this is the best way to do it.
12700	
12800		1.  People (including programs and sometimes animals) believe
12900	sentences.
13000	
13100		2. The sentences are in our language, not in the believer's
13200	language, if any.  Thus we can say that Mao believes "The cultural
13300	revolution was good" without implying that Mao knows English.  We can
13400	also  say "The dog believes his master is holding the ball behind his
13500	back" without implying that the dog uses any language.
13600	
13700		3. Beliefs have implications, and an important  aspect  of  a
13800	belief  structure  is  the  conditions  under  which  consequences of
13900	beliefs are also believed.  Belief sets that are closed under  simple
14000	logical consequence, such as purely propositional consequence, may be
14100	of interest both for describing natural systems and for  constructing
14200	artificial ones.
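	As a concrete illustration of point 3, here is a minimal sketch, in
illustrative Python with hypothetical sentence names: beliefs are attributed
to the believer as sentences of our language, and the set is closed under one
simple propositional rule, modus ponens, applied to conditionals represented
as ("if", p, q).

# A minimal sketch of a belief set closed under one simple propositional
# rule (modus ponens).  Beliefs are sentences of our language: either
# atomic strings or conditionals ("if", p, q).  All names are hypothetical.

def close_under_modus_ponens(beliefs):
    """Return the smallest superset of `beliefs` closed under modus ponens."""
    closed = set(beliefs)
    changed = True
    while changed:
        changed = False
        for b in list(closed):
            if (isinstance(b, tuple) and b[0] == "if"
                    and b[1] in closed and b[2] not in closed):
                closed.add(b[2])
                changed = True
    return closed

# Beliefs we attribute to John, stated in our language rather than his.
john = {
    "Fred is at home",
    ("if", "Fred is at home", "the lights are on"),
}

print(close_under_modus_ponens(john))
# John is also credited with believing "the lights are on".

	Whether a given natural or artificial believer actually satisfies such
a closure condition is, as the text suggests, a question about the particular
belief structure rather than a matter of definition.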
14300	
14400		Another   possibility   for  belief  systems  is  to  use  an
14500	originally uninterpreted language and map a subset of  its  sentences
14600	into  the  meta-language.   This  has  the advantage that some of the
14700	subject's weirder beliefs can be regarded as meaningless.
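	A sketch of this second possibility, again in illustrative Python with
entirely hypothetical sentence tokens: the believer's sentences are initially
uninterpreted symbols, and only the subset mapped into our meta-language is
treated as meaning anything.

# The belief language starts out uninterpreted; only a subset of its
# sentences is mapped into the meta-language.  Tokens and the mapping
# are hypothetical.

subject_beliefs = ["S1", "S2", "S3"]      # uninterpreted internal sentences

interpretation = {                        # partial map into our language
    "S1": "Fred is at home",
    "S2": "The bank balance is positive",
}                                         # "S3" is deliberately left unmapped

for s in subject_beliefs:
    meaning = interpretation.get(s)
    if meaning is None:
        print(s, "-> regarded as meaningless")
    else:
        print(s, "->", meaning)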
14800